
    Levetiracetam in clinical practice: efficacy and tolerability in epilepsy.

    BACKGROUND: The aim of this study was to evaluate the efficacy and tolerability of levetiracetam (LEV) in patients with different epilepsy syndromes. METHODS: We evaluated epileptic patients seen in the previous 18 months, including all patients with present or past exposure to LEV. Tolerability of LEV therapy was evaluated in all patients; efficacy was evaluated only in patients who had received LEV for at least six months. Two hundred and two patients were included in the study. Patients were considered responders when showing a > 50% reduction in seizure frequency and non-responders when seizure frequency was unchanged, worsened, or showed a reduction < 50%. RESULTS: Thirty patients did not complete six months of LEV treatment and dropped out. Of the patients with uncontrolled seizures treated for at least six months, 57.4% were responders, with 27.7% seizure free. Adverse effects were observed in 46 patients (23%) and were responsible for early drop-out in 26. Adverse effects occurred significantly more often in females than in males (30.6% vs 13.2%); moreover, nearly 30% of women with adverse effects complained of more than one adverse effect, whereas this was never observed in male patients. CONCLUSIONS: Our study shows that LEV is a well-tolerated and effective treatment, both in monotherapy and as an add-on. Further investigations on larger samples are needed to address the issue of gender-related tolerability.
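    The responder criterion used in the abstract (> 50% reduction in seizure frequency, with unchanged, worsened, or < 50%-reduced frequency counting as non-response) can be sketched as a simple classification rule. This is only an illustration of the stated threshold, not the study's analysis code; the function name and inputs are hypothetical.

```python
def classify_patient(baseline_seizures: float, treated_seizures: float) -> str:
    """Classify a patient by seizure-frequency change under treatment,
    using the > 50% reduction threshold stated in the abstract (sketch)."""
    if treated_seizures == 0:
        return "seizure-free"  # the abstract reports these separately (27.7%)
    reduction = (baseline_seizures - treated_seizures) / baseline_seizures
    return "responder" if reduction > 0.5 else "non-responder"

# Example: 10 seizures/month at baseline, 4 under LEV -> 60% reduction
print(classify_patient(10, 4))  # responder
```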

    Formal Analysis of Vulnerabilities of Web Applications Based on SQL Injection (Extended Version)

    We present a formal approach that exploits SQL Injection (SQLi) attacks to search for security flaws in a web application. We give a formal representation of web applications and databases, and show that our formalization effectively exploits SQLi attacks. We implemented our approach in a prototype tool called SQLfast, and we show its efficiency on real-world case studies, including the discovery of an attack on Joomla! that no other tool can find.
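    As a minimal sketch of the class of flaw such tools target (not SQLfast's actual formalism), the classic tautology-based SQLi arises when user input is concatenated into a query string, letting the input alter the query's structure; parameterized queries bind the input as data instead. The table and credentials below are invented for illustration.

```python
import sqlite3

# Toy database standing in for a web application's backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

def vulnerable_login(name, pw):
    # String concatenation: attacker input is parsed as SQL.
    query = f"SELECT * FROM users WHERE name = '{name}' AND pw = '{pw}'"
    return conn.execute(query).fetchall()

def safe_login(name, pw):
    # Parameterized query: input is bound as data, never parsed as SQL.
    return conn.execute(
        "SELECT * FROM users WHERE name = ? AND pw = ?", (name, pw)
    ).fetchall()

payload = "' OR '1'='1"  # classic tautology payload
print(vulnerable_login("alice", payload))  # returns the row: auth bypassed
print(safe_login("alice", payload))        # returns []: attack neutralized
```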

    Improving Recommendation Quality by Merging Collaborative Filtering and Social Relationships

    Matrix Factorization techniques have been successfully applied to raise the quality of suggestions generated by Collaborative Filtering Systems (CFSs). Traditional CFSs based on Matrix Factorization operate on the ratings provided by users and have been recently extended to incorporate demographic aspects such as age and gender. In this paper we propose to merge CF techniques based on Matrix Factorization and information regarding social friendships in order to provide users with more accurate suggestions and rankings on items of their interest. The proposed approach has been evaluated on a real-life online social network; the experimental results show an improvement against existing CF approaches. A detailed comparison with related literature is also presented.
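    The idea of combining rating-based factorization with friendship information can be sketched as follows. This is an illustrative toy, not the paper's model: the ratings, the friendship edge, the SGD update, and the social regularizer (which pulls friends' latent vectors together) are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 4, 5, 2
# Hypothetical (user, item, rating) observations.
ratings = [(0, 1, 5.0), (0, 2, 3.0), (1, 1, 4.0), (2, 0, 2.0), (3, 4, 5.0)]
friends = [(0, 1)]  # hypothetical friendship edge between users 0 and 1

P = rng.normal(scale=0.1, size=(n_users, k))  # user latent factors
Q = rng.normal(scale=0.1, size=(n_items, k))  # item latent factors
lr, reg, social = 0.05, 0.02, 0.1

for epoch in range(200):
    # Standard SGD on observed ratings with L2 regularization.
    for u, i, r in ratings:
        err = r - P[u] @ Q[i]
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * P[u] - reg * Q[i])
    # Social term: nudge friends' factor vectors toward each other.
    for u, v in friends:
        P[u] -= lr * social * (P[u] - P[v])
        P[v] -= lr * social * (P[v] - P[u])

print(round(float(P[0] @ Q[1]), 2))  # predicted rating for user 0, item 1
```

With enough epochs the prediction for an observed pair approaches its training rating; the social term additionally makes friends' predictions correlate.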

    XML Matchers: approaches and challenges

    Schema Matching, i.e. the process of discovering semantic correspondences between concepts adopted in different data source schemas, has been a key topic in Database and Artificial Intelligence research areas for many years. In the past, it was largely investigated especially for classical database models (e.g., E/R schemas, relational databases, etc.). However, in recent years, the widespread adoption of XML in the most disparate application fields pushed a growing number of researchers to design XML-specific Schema Matching approaches, called XML Matchers, aiming at finding semantic matchings between concepts defined in DTDs and XSDs. XML Matchers do not just take well-known techniques originally designed for other data models and apply them to DTDs/XSDs, but exploit specific XML features (e.g., the hierarchical structure of a DTD/XSD) to improve the performance of the Schema Matching process. The design of XML Matchers is currently a well-established research area. The main goal of this paper is to provide a detailed description and classification of XML Matchers. We first describe to what extent the specificities of DTDs/XSDs affect the Schema Matching task. Then we introduce a template, called XML Matcher Template, that describes the main components of an XML Matcher, their role and behavior. We illustrate how each of these components has been implemented in some popular XML Matchers. We consider our XML Matcher Template as the baseline for objectively comparing approaches that, at first glance, might appear unrelated. The introduction of this template can be useful in the design of future XML Matchers. Finally, we analyze commercial tools implementing XML Matchers and introduce two challenging issues strictly related to this topic, namely XML source clustering and uncertainty management in XML Matchers.
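    A minimal flavor of one component a Schema Matcher typically includes, a linguistic (name-similarity) matcher over XSD element names, can be sketched like this. The two toy schemas and the 0.6 threshold are invented for illustration; real XML Matchers, as the abstract notes, also exploit the hierarchical structure of the DTD/XSD, which this sketch ignores.

```python
import xml.etree.ElementTree as ET
from difflib import SequenceMatcher

XSD = "{http://www.w3.org/2001/XMLSchema}"  # namespace-qualified tag prefix

schema_a = """<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="authorName"/><xs:element name="bookTitle"/>
</xs:schema>"""
schema_b = """<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="author_name"/><xs:element name="title"/>
</xs:schema>"""

def element_names(xsd_text):
    """Collect the name attributes of all xs:element declarations."""
    root = ET.fromstring(xsd_text)
    return [e.get("name") for e in root.iter(XSD + "element")]

def match(names_a, names_b, threshold=0.6):
    """Pair each element of schema A with its most similar name in B."""
    pairs = []
    for a in names_a:
        score, best = max(
            (SequenceMatcher(None, a.lower(), b.lower()).ratio(), b)
            for b in names_b
        )
        if score >= threshold:
            pairs.append((a, best, round(score, 2)))
    return pairs

print(match(element_names(schema_a), element_names(schema_b)))
# pairs authorName with author_name and bookTitle with title
```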

    Analyzing the Facebook Friendship Graph

    Online Social Networks (OSNs) have in recent years acquired a huge and increasing popularity as one of the most important emerging Web phenomena, deeply modifying the behavior of users and contributing to building a solid substrate of connections and relationships among people using the Web. In this preliminary work, our purpose is to analyze Facebook, considering a significant sample of data reflecting relationships among subscribed users. Our goal is to extract from this platform relevant information about the distribution of these relations and to exploit tools and algorithms provided by Social Network Analysis (SNA) to discover and, possibly, understand underlying similarities between the development of OSNs and real-life social networks.
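    A typical first SNA step on a friendship sample is computing the degree distribution, whose shape (e.g. a heavy tail) is what such analyses compare against real-life social networks. The edge list below is hypothetical, standing in for a sample of Facebook friendships.

```python
from collections import Counter

# Hypothetical undirected friendship edges (user-id pairs).
edges = [(1, 2), (1, 3), (1, 4), (2, 3), (4, 5), (5, 6), (1, 5)]

degree = Counter()
for u, v in edges:      # friendship is symmetric: count both endpoints
    degree[u] += 1
    degree[v] += 1

# Degree distribution: how many users have exactly d friends.
dist = Counter(degree.values())
print(dict(sorted(dist.items())))  # {1: 1, 2: 3, 3: 1, 4: 1}
```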

    Measuring Similarity in Large-Scale Folksonomies

    Social (or folksonomic) tagging has become a very popular way to describe content within Web 2.0 websites. Unlike taxonomies, which impose a hierarchical categorisation of content, folksonomies enable end-users to freely create and choose the categories (in this case, tags) that best describe some content. However, as tags are informally defined, continually changing, and ungoverned, social tagging has often been criticised for lowering, rather than increasing, the efficiency of searching, due to the number of synonyms, homonyms, and polysemous terms, as well as the heterogeneity of users and the noise they introduce. To address this issue, a variety of approaches have been proposed that recommend to users which tags to use, both when labelling and when looking for resources. As we illustrate in this paper, real-world folksonomies are characterized by power-law distributions of tags, over which commonly used similarity metrics, including the Jaccard coefficient and cosine similarity, fail to compute. We thus propose a novel metric, specifically developed to capture similarity in large-scale folksonomies, that is based on a mutual reinforcement principle: two tags are deemed similar if they have been associated with similar resources and, vice versa, two resources are deemed similar if they have been labelled with similar tags. We offer an efficient realisation of this similarity metric and assess its quality experimentally by comparing it against cosine similarity on three large-scale datasets, namely Bibsonomy, MovieLens and CiteULike.
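    The mutual reinforcement principle can be sketched with two alternating matrix updates: tag similarity inherits from the similarity of the resources the tags label, and resource similarity inherits back from tag similarity. This toy iteration, the 3×3 tag-resource matrix, and the max-normalization are all illustrative assumptions, not the paper's actual metric or its efficient realisation.

```python
import numpy as np

# Rows = tags, columns = resources; A[t, r] = 1 if tag t labels resource r.
A = np.array([[1, 1, 0],    # tag "film"
              [1, 1, 0],    # tag "movie"  (labels the same resources)
              [0, 0, 1]],   # tag "recipe" (disjoint usage)
             dtype=float)

n_tags, n_res = A.shape
S_t = np.eye(n_tags)        # tag-tag similarity, initialised to identity
S_r = np.eye(n_res)         # resource-resource similarity

for _ in range(10):
    # Tags are similar if attached to similar resources, and vice versa.
    S_t = A @ S_r @ A.T
    S_r = A.T @ S_t @ A
    S_t /= S_t.max()        # keep values bounded in [0, 1]
    S_r /= S_r.max()

print(round(float(S_t[0, 1]), 2))  # "film" vs "movie": high
print(round(float(S_t[0, 2]), 2))  # "film" vs "recipe": no overlap, zero
```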